Sparse trace norm regularization

Abstract

We study the problem of estimating multiple predictive functions from a dictionary of basis functions in the nonparametric regression setting. Our estimation scheme assumes that each predictive function can be estimated in the form of a linear combination of the basis functions. By assuming that the coefficient matrix admits a sparse low-rank structure, we formulate the function estimation prob...
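
To make the formulation above concrete, here is a minimal NumPy sketch (not the authors' code) of a multi-response least-squares loss on a coefficient matrix W penalized by both an entry-wise ℓ1 term and a trace (nuclear) norm term; the function name, data shapes, and weights lam1, lam2 are invented for illustration.

import numpy as np

def sparse_trace_norm_objective(X, Y, W, lam1, lam2):
    # Least-squares fit for k response columns sharing one coefficient matrix W.
    loss = 0.5 * np.linalg.norm(Y - X @ W, "fro") ** 2
    l1 = lam1 * np.abs(W).sum()                # entry-wise sparsity
    nuc = lam2 * np.linalg.norm(W, "nuc")      # sum of singular values -> low rank
    return loss + l1 + nuc

# Illustrative call on synthetic data (shapes and weights are arbitrary).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))                                    # n samples x d basis functions
W_true = np.outer(rng.standard_normal(20), rng.standard_normal(5))   # rank-one coefficient matrix
W_true[np.abs(W_true) < 0.5] = 0.0                                   # also sparse
Y = X @ W_true + 0.1 * rng.standard_normal((50, 5))                  # k = 5 predictive functions
print(sparse_trace_norm_objective(X, Y, W_true, lam1=0.1, lam2=0.1))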

Related articles

On Trace Norm Regularization for Document Modeling

We propose a model for text. At the lowest level, we use a simple language model which (at best) can only capture local structure. However, the model allows for these lowest-level components to be structured so that higher-level ordering and grouping which is present in the data may be reflected in the model. 1 The Data: We assume that the text data is made up of a set of units. We represent the...


Trace Lasso: a trace norm regularization for correlated designs

Using the ℓ1-norm to regularize the estimation of the parameter vector of a linear model leads to an unstable estimator when covariates are highly correlated. In this paper, we introduce a new penalty function which takes into account the correlation of the design matrix to stabilize the estimation. This norm, called the trace Lasso, uses the trace norm of the selected covariates, which is a co...
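
As a sketch of the penalty described above (using the standard definition of the trace Lasso as the nuclear norm of the design matrix with its columns rescaled by the weight vector, not code from the paper), the snippet below computes it and checks the two limiting cases: it matches the ℓ1 norm for orthonormal columns and the ℓ2 norm when all columns are identical.

import numpy as np

def trace_lasso(w, X):
    # Nuclear norm of the design matrix with column j rescaled by w[j].
    return np.linalg.norm(X * w, "nuc")

w = np.array([1.0, -2.0, 0.5])

# Orthonormal (uncorrelated) columns: the penalty equals the l1 norm of w.
print(trace_lasso(w, np.eye(3)), np.abs(w).sum())

# Identical unit-norm (perfectly correlated) columns: it equals the l2 norm of w.
x = np.ones((3, 1)) / np.sqrt(3)
print(trace_lasso(w, np.tile(x, (1, 3))), np.linalg.norm(w))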


Sparse Portfolio Selection via Quasi-Norm Regularization

... to obtain an approximate second-order KKT solution of the ℓp-norm models in polynomial time with a fixed error tolerance, and then test our ℓp-norm models on CRSP (1992-2013) and also S&P 500 (2008-2012) data. The empirical results illustrate that our ℓp-norm regularized models can generate portfolios of any desired sparsity with portfolio variance, portfolio return and Sharpe Ratio comparable t...
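
For illustration only (the objective form, data, and parameter names below are invented, not the authors' models or datasets), a mean-variance portfolio objective with an ℓp quasi-norm penalty, 0 < p < 1, can be evaluated like this:

import numpy as np

def lp_portfolio_objective(w, Sigma, mu, lam, p=0.5, gamma=1.0):
    # Portfolio variance minus return (weighted by risk appetite gamma), plus an
    # lp quasi-norm penalty (0 < p < 1) that pushes small positions exactly to zero.
    variance = w @ Sigma @ w
    expected_return = mu @ w
    lp_term = np.sum(np.abs(w) ** p)
    return variance - gamma * expected_return + lam * lp_term

# Toy 4-asset example with a synthetic covariance matrix and mean returns.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T / 4.0                      # positive semi-definite covariance
mu = rng.uniform(0.01, 0.05, size=4)
w = np.array([0.5, 0.5, 0.0, 0.0])         # a sparse candidate portfolio
print(lp_portfolio_objective(w, Sigma, mu, lam=0.1, p=0.5))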


Lifted coordinate descent for learning with trace-norm regularization

We consider the minimization of a smooth loss with trace-norm regularization, which is a natural objective in multi-class and multitask learning. Even though the problem is convex, existing approaches rely on optimizing a non-convex variational bound, which is not guaranteed to converge, or repeatedly perform singular-value decomposition, which prevents scaling beyond moderate matrix sizes. We ...
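
The lifted view alluded to above represents W as a nonnegative combination of rank-one atoms u vᵀ, so updates can use only the leading singular pair of the loss gradient rather than repeated SVDs of W. The sketch below is a schematic greedy atom step under that view, not the paper's algorithm; the step size, stopping test, and all names are assumptions.

import numpy as np

def top_singular_pair(G, iters=100):
    # Power iteration for the leading singular vectors of G; this is the only
    # spectral computation needed per step, instead of a full SVD of W.
    v = np.random.default_rng(0).standard_normal(G.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = G @ v
        u /= np.linalg.norm(u)
        v = G.T @ u
        v /= np.linalg.norm(v)
    return u, v

def greedy_atom_step(X, Y, W, lam, step=0.1):
    # One schematic step in the lifted (atomic) representation: W grows by a
    # rank-one atom u v^T selected from the gradient of the smooth loss.
    G = X.T @ (X @ W - Y)              # gradient of 0.5 * ||Y - X W||_F^2
    u, v = top_singular_pair(-G)
    if u @ (-G) @ v <= lam:            # spectral norm of the gradient below lam:
        return W                       # no new atom decreases the objective (to first order)
    return W + step * np.outer(u, v)

# Toy usage: a few rank-one updates on synthetic data.
rng = np.random.default_rng(2)
X = rng.standard_normal((30, 10))
Y = X @ np.outer(rng.standard_normal(10), rng.standard_normal(4))
W = np.zeros((10, 4))
for _ in range(5):
    W = greedy_atom_step(X, Y, W, lam=0.1)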



Journal

Journal title: Computational Statistics

Year: 2013

ISSN: 0943-4062, 1613-9658

DOI: 10.1007/s00180-013-0440-7